The Role of Causality in Judgment Under Uncertainty
Abstract
Leading accounts of judgment under uncertainty evaluate performance within purely statistical frameworks, holding people to the standards of classical Bayesian (Tversky & Kahneman, 1974) or frequentist (Gigerenzer & Hoffrage, 1995) norms. We argue that these frameworks have limited ability to explain the success and flexibility of people's real-world judgments, and propose an alternative normative framework based on Bayesian inferences over causal models. Deviations from traditional norms of judgment, such as "base-rate neglect", may then be explained in terms of a mismatch between the statistics given to people and the causal models they intuitively construct to support probabilistic reasoning. Four experiments show that when a clear mapping can be established from given statistics to the parameters of an intuitive causal model, people are more likely to use the statistics appropriately, and that when the classical and causal Bayesian norms differ in their prescriptions, people's judgments are more consistent with causal Bayesian norms.

The Role of Causality in Judgment Under Uncertainty

Everywhere in life, people are faced with situations that require intuitive judgments of probability. How likely is it that this person is trustworthy? That this meeting will end on time? That this pain in my side is a sign of a serious disease? Survival and success in the world depend on making judgments that are as accurate as possible given the limited amount of information that is often available.

To explain how people make judgments under uncertainty, researchers typically invoke a computational framework to clarify the kinds of inputs, computations, and outputs that they expect people to use during judgment. We can view human judgments as approximations (sometimes better, sometimes worse) to modes of reasoning within a rational computational framework, where a computation is "rational" to the extent that it provides adaptive value in real-world tasks and environments. However, there is more than one rational framework for judgment under uncertainty, and behavior that looks irrational under one framework may look rational under a different framework. Because of this, evidence of "error-prone" behavior as judged by one framework may alternatively be viewed as evidence that a different rational framework is appropriate.

This paper considers the question of which computational framework best explains people's judgments under uncertainty. To answer this, we must consider what kinds of real-world tasks and environments people encounter, which frameworks are best suited to these environments (i.e., which we should take to be normative), and how well these frameworks predict people's actual judgments under uncertainty (i.e., which framework offers the best descriptive model). We will propose that a causal Bayesian framework, in which Bayesian inferences are made over causal models, represents a more appropriate normative standard and a more accurate descriptive model than previous frameworks for judgment under uncertainty.

The plan of the paper is as follows. We first review previous accounts of judgment under uncertainty, followed by the arguments for why a causal Bayesian framework provides a better normative standard for human judgment. We then present four experiments supporting the causal Bayesian framework as a descriptive model of people's judgments.
Our experiments focus on the framework's ability to explain when and why people exhibit base-rate neglect, a well-known judgment phenomenon that has often been taken as a violation of classical Bayesian norms. Specifically, we test the hypotheses that people's judgments can be explained as approximations to Bayesian inference over appropriate causal models, and that base-rate neglect often occurs when experimenter-provided statistics do not map clearly onto parameters of the causal model participants are likely to invoke. We conclude by discussing implications of the causal Bayesian framework for other phenomena in probabilistic reasoning, and for improving the teaching of statistical reasoning.

Statistical frameworks for judgment under uncertainty

Most previous accounts – whether arguing for or against human adherence to rationality – take some framework of statistical inference to be the normative standard (Anderson, 1990; Gigerenzer & Hoffrage, 1995; McKenzie, 2003; Oaksford & Chater, 1994; Peterson & Beach, 1967; Shepard, 1987; Tversky & Kahneman, 1974). Statistical inference frameworks generally approach the judgment of an uncertain variable, such as whether someone has a disease, by considering both the current data, such as the person's symptoms, as well as past co-occurrences of the data and the uncertain variable, such as previous cases of patients with the same symptoms and various diseases. Because these frameworks focus on observations rather than knowledge, beliefs about the causal relationships between variables do not play a role in inference.

Using statistical inference frameworks as a rational standard, several hypotheses have been advanced to describe how people make judgments under uncertainty. Early studies of judgment suggested that people behaved as "intuitive statisticians" (Peterson & Beach, 1967), because their judgments corresponded closely to classical Bayesian statistical norms, which were presumed rational. Classical Bayesian norms explain how prior beliefs may be updated rationally in light of new data, via Bayes' rule. To judge P(H|D), the probability of an uncertain hypothesis H given some data D, Bayes' rule prescribes a rational answer, as long as one knows (1) P(H), the prior degree of belief in H, and (2) P(D|H) and P(D|¬H), the data expected if H were true and if H were false:

    P(H|D) = P(H) P(D|H) / P(D)    (1)

where P(D) = P(H) P(D|H) + P(¬H) P(D|¬H).

The intuitive statistician hypothesis did not reign for long. It was not able to account for a rapidly accumulating body of experimental evidence that people reliably violate Bayesian norms (Ajzen, 1977; Bar-Hillel, 1980; Eddy, 1982; Lyon & Slovic, 1976; Nisbett & Borgida, 1975; Tversky & Kahneman, 1974). For example, consider the "mammogram problem", a Bayesian diagnosis problem which even doctors commonly fail (Eddy, 1982). One well-tested version comes from Gigerenzer and Hoffrage (1995), adapted from Eddy (1982), in which participants were told that the probability of breast cancer for any woman getting a screening is 1%, that 80% of women with cancer receive a positive mammogram, and that 9.6% of women without cancer also receive a positive mammogram. Participants were then asked to judge the likelihood that a woman who receives a positive mammogram actually has cancer.
Participants often give answers of 70%-90% (Eddy, 1982; Gigerenzer & Hoffrage, 1995), while Bayes' theorem prescribes a much lower probability of 7.8%. In this case, H is "patient X has breast cancer", D is "patient X received a positive mammogram", and the required task is to judge P(H|D), the probability that the patient has breast cancer given that she received a positive mammogram:

    P(H|D) = P(H) P(D|H) / [P(H) P(D|H) + P(¬H) P(D|¬H)]
           = (1% × 80%) / (1% × 80% + 99% × 9.6%) ≈ 7.8%    (2)

Kahneman and Tversky (1973) characterized the source of such errors as "neglect" of the base rate (in this case, the rate of cancer), which should be used to set P(H) in the above calculation of P(H|D) (in this case, the probability of cancer given a positive mammogram).

The heuristics and biases view, which came to replace the intuitive statistician framework, sought to understand probabilistic judgments as heuristics, which approximate normative Bayesian statistical methods in many cases, but lead to systematic errors in others (Tversky & Kahneman, 1974). Given the focus of the heuristics and biases program on judgment errors, many concluded that people were ill-equipped to reason successfully under uncertainty. Slovic, Fischhoff, and Lichtenstein (1976) wrote: "It appears that people lack the correct programs for many important judgmental tasks.... it may be argued that we have not had the opportunity to evolve an intellect capable of dealing conceptually with uncertainty" (p. 174). Yet by the standards of engineered artificial intelligence systems, the human capacity for judgment under uncertainty is prodigious. People, but not computers, routinely make successful uncertain inferences on a wide and flexible range of complex real-world tasks. As Glymour (2001) memorably asked, "If we're so dumb, how come we're so smart?" (p. 8). Research in the heuristics and biases tradition generally did not address this question in a satisfying way.

These previous paradigms for analyzing judgment under uncertainty, including the Heuristics & Biases program (Tversky & Kahneman, 1974) and the Natural Frequency view (Gigerenzer & Hoffrage, 1995), have one commitment in common: they accept the appropriateness of traditional statistical inference as a rational standard for human judgment. Purely statistical methods are best suited to reasoning about a small number of variables based on many observations of their patterns of co-occurrence – the typical situation in ideally controlled scientific experiments. In contrast, real-world reasoning typically involves the opposite scenario: complex systems with many relevant variables and a relatively small number of opportunities for observing their co-occurrences. Because of this complexity, the amount of data required for reliable inference with purely statistical frameworks, which generally grows exponentially in the number of variables, is often not available in real-world environments. The conventional statistical paradigms developed for idealized scientific inquiry may thus be inappropriate as rational standards for human judgment in real-world tasks. Proposing heuristics as descriptive models to account for deviations from statistical norms only clouds the issue, as there is no way to tell whether an apparent deviation is a poor heuristic approximation to a presumed statistical norm, or a good approximation to some more adaptive approach.
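For concreteness, here is a minimal sketch (ours, not part of the original paper) of the classical Bayesian computation in Python; it reproduces the 7.8% answer of Equation 2 from the three statistics given in the problem:

```python
def posterior(prior, hit_rate, false_positive_rate):
    """Bayes' rule (Equation 1) for a binary hypothesis H and datum D."""
    p_d = prior * hit_rate + (1 - prior) * false_positive_rate  # P(D)
    return prior * hit_rate / p_d                               # P(H|D)

# Mammogram problem: P(cancer) = 1%, P(+M|cancer) = 80%, P(+M|~cancer) = 9.6%
print(posterior(0.01, 0.80, 0.096))  # ~0.078, i.e., 7.8%
```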
We will propose that a causal Bayesian framework provides this more adaptive approach, and that it offers both a better normative standard than purely statistical methods and a better descriptive model than heuristic accounts. Like classical statistical norms, the framework we propose is Bayesian, but rather than making inferences from purely statistical data, inferences in our framework are made with respect to a causal model, and are subject to the constraints of causal domain knowledge. Our causal Bayesian framework is more adaptive than previous proposals, as it explains how rational judgments can be made with the relatively limited statistical data that is typically available in real-world environments. This approach also represents a better descriptive model than purely statistical norms or heuristics, which do not emphasize, or even have room to accommodate, the kinds of causal knowledge that seem to underlie much of people's real-world judgment.

We study the role of causal knowledge in judgment by focusing on how it modulates the classic phenomenon of "base-rate neglect". Early studies indicated that people were more likely to neglect base rates that lack "causal relevance" (Ajzen, 1977; Tversky & Kahneman, 1980), although the notion of causal relevance was never well defined. Bar-Hillel (1980) argued that the salience of the base rate determined whether people would use this information, and that causal relevance was just one form of salience. Contrary to the conclusions of the heuristics & biases literature, we argue that for many well-known stimuli, the features of the base rate are not what lead people to exhibit apparent "base-rate neglect". We offer as evidence four experiments in which the description of the base rate is identical across two conditions, but people neglect the base rate in one condition and use it appropriately in the second condition. We further argue that people in these experiments may not actually be misusing base rate statistics; we show how cases of "base-rate neglect" may be re-interpreted as cases in which the prescriptions of classical Bayesian norms are non-normative by the standards of causal Bayesian inference. Our experiments will show that when these prescriptive norms agree, people often use the given statistics normatively (by both standards), but when they disagree, people's judgments more often adhere to the causal Bayesian standard than the classical Bayesian standard. Furthermore, when the problem makes clear which causal model should be used and how given statistics should be incorporated into that model, we find that people rarely neglect base rates.

As indicated above, we are not the first to propose that causal knowledge plays a role in base-rate neglect. Researchers in the heuristics and biases tradition investigated how causal factors influenced the phenomenology of base-rate neglect, but they offered no precise or generalizable models for how causal knowledge and probabilistic judgment interact, and they did not explore the rational role of causal reasoning in judgment under uncertainty. Ajzen (1977) proposed that a "causality heuristic" leads to neglect of information that has no apparent causal explanation. Following Ajzen (1977), Tversky and Kahneman (1980) proposed that "evidence that fits into a causal schema is utilized, whereas equally informative evidence which is not given a causal interpretation is neglected" (p. 65).
However, neither explained how a focus on causal factors could lead to successful judgments in the real world. They did not attempt to explain why people would have such a heuristic or why it should work the way that it does. On the contrary, the heuristics and biases tradition did not appear to treat attention to causal structure as rational or adaptive. The use of causal schemas was instead viewed as an intuitive, fuzzy form of reasoning that, to our detriment, tends to take precedence over normative statistical reasoning when given the chance. In contrast to Tversky and Kahneman's (1980) undefined "causal schemas", our proposal for inference over causal models based on Bayesian networks provides a well-defined, rational, and adaptive method for judgment under uncertainty, which can succeed in real-world tasks where noncausal statistical methods fail to apply. We will argue that people's judgments can in fact be both causally constrained and rational – and rational precisely because of how they exploit causal knowledge.

A causal Bayesian framework for judgment under uncertainty

Causal reasoning enables one to combine available statistics with knowledge of causal relationships, resulting in more reliable judgments, with less data required than purely statistical methods. It is becoming clear from research in artificial intelligence (Pearl, 2000), associative learning (Cheng, 1997; Glymour, 2001; Gopnik & Glymour, 2002; Gopnik & Sobel, 2000; Waldmann, 1996), and categorization (Ahn, 1999; Rehder, 2003) that causal reasoning methods are often better suited than purely statistical methods for inference in real-world environments. Causal Bayesian networks have been proposed as tools for understanding how people intuitively learn and reason about causal systems (e.g., Glymour & Cheng, 1998; Gopnik et al., 2004; Griffiths & Tenenbaum, 2005; Sloman & Lagnado, in press; Steyvers, Tenenbaum, Wagenmakers & Blum, 2003; Tenenbaum & Griffiths, 2001, 2003; Waldmann, 2001), but their implications for more general phenomena of judgment under uncertainty have not been systematically explored. We see the present paper as a first attempt in this direction, with a focus on explaining base-rate neglect, one of the best-known phenomena of modern research on probabilistic judgment.

This section outlines the theoretical background for our work. A full and formal treatment of causal Bayesian networks is beyond the scope of this paper, so we begin by summarizing the main aspects of the causal Bayesian framework that our experiments build on. We then argue that this framework represents a better normative standard for judgment under uncertainty in real-world environments than the purely statistical frameworks that have motivated previous research on human judgment. Finally, we will illustrate how causal Bayesian reasoning implies that a given piece of statistical information, such as a base rate, may be used differently depending on the reasoner's causal model. In particular, we describe two ways in which the causal Bayesian framework may predict cases of apparent base-rate neglect: a given statistic may not always fit into a person's causal model, or it may fit in such a way that the structure of the causal model dictates that it is not relevant to a certain judgment at hand. We will study human judgment as an approximation to the following ideal analysis.
We assume that a causal mental model can be represented by a Bayesian network (Pearl, 2000), a directed graphical probabilistic model in which nodes correspond to variables and edges correspond to direct causal influences. A set of parameters associated with each node defines a conditional probability distribution for the corresponding variable, conditioned on the values of its parents in the causal graph (i.e., its direct causes). Loosely speaking, the edge structure of the graph specifies what causes what, while the parameters specify how effects depend probabilistically on their causes. The product of the conditional distributions associated with each node defines a full joint probability distribution over all variables in the system. Any probabilistic judgment of interest can be computed by manipulating this joint distribution in accordance with Bayes' rule.

When people are confronted with a judgment task, they face three distinct aspects of judgment: (1) constructing a causal model, (2) setting the model's parameters, and (3) inferring probabilities of target variables via Bayesian inference over the model. We will illustrate this process using a scenario we created for Experiment 1, a version of the classic problem in which participants are asked to judge the probability that a woman receiving a positive mammogram actually has breast cancer, which we call the "benign cyst" scenario. The text of the scenario is shown in Figure 1, along with the three stages of judgment.

The first step is to construct a causal model (see Figure 1a) relating the variables described in the task. In this case, information is given in the task description that benign cysts (the variable cyst) can cause positive mammograms (the variable +M). Often, however, people rely on prior knowledge of which variables cause which others. For example, most participants already know that breast cancer (the variable cancer) causes positive mammograms (the variable +M).

The second step is to set the values of the parameters characterizing the relationships between cause and effect (see Figure 1b). In this case precise numerical values for some parameters can be determined from the statistical information provided in the judgment task (e.g., the base rate of breast cancer is 2% and the base rate of benign cysts is approximately 6%, neglecting the very low probability that a woman has both breast cancer and benign cysts). Background knowledge may be necessary to supply values for other parameters, such as how the effect depends on its causes. In this case, one might assume that positive mammograms do not occur unless caused, that the causes act independently to generate positive mammograms, and that both of the given causes are fairly strong. These assumptions can be captured using a noisy-or parameterization (Pearl, 1988), a simple model to characterize independent generative causes that is equivalent to the parameterizations used in Cheng's (1997) power PC theory or Rehder's (2003) causal models for categorization. For simplicity, in Figure 1 we assume that either cause when present produces the effect with probability near 1.

Once the causal structure of the model has been determined and parameters have been specified, inference can be performed on this model to make judgments about unknown variables (see Figure 1c). Bayesian reasoning prescribes the correct answer for any judgment about the state of one variable given knowledge about the states of other variables
(e.g., given knowledge that a woman received a positive mammogram, the probability that she has cancer as opposed to a benign cyst is approximately 25%).

Expert systems based on Bayesian networks (Pearl, 1988) have traditionally been built and used in just this three-stage fashion: an expert provides an initial qualitative causal model, objectively measured statistics determine the model's parameters, and inference over the resulting Bayesian network model automates the expert's judgments about unobserved events in novel cases. This top-down, knowledge-based view of how causal models are constructed is somewhat different from much recent work on causal learning in psychology, which emphasizes more bottom-up mechanisms of statistical induction from data (e.g., Glymour, 2001). As we will argue, however, the prior causal knowledge that people bring to a judgment task is essential for explaining how those judgments can be successful in the real world, as well as for determining when and how certain statistics given in a task (such as base rates) will affect people's judgments.

Causal Bayesian inference as a new normative standard

This causal Bayesian framework may provide a more reasonable normative standard for human judgment than classical, purely statistical norms that do not depend on a causal analysis. Our argument follows in the tradition of recent rational analyses of cognition (Anderson, 1990; Oaksford & Chater, 1999). Causal Bayesian reasoning is often more ecologically adaptive, because it can leverage available causal knowledge to make appropriate judgments even when there is not sufficient statistical data available to make rational inferences under non-causal Bayesian norms.

The structure of the causal model typically reduces the number of numerical parameters that must be set in order to perform Bayesian inference. In general, for a system of N binary variables, standard Bayesian inference using the full joint distribution requires specifying 2^N − 1 numbers, while a causally structured model could involve considerably fewer parameters. For instance, in Figure 1, only four parameters are needed to specify the joint distribution among three variables, which would require seven numbers if we were using conventional Bayesian reasoning. The simplifications come from several pieces of qualitative causal knowledge: that there is no direct causal connection between having breast cancer and having benign cysts, that breast cancer and benign cysts act to produce positive mammograms through independent mechanisms (and hence each cause requires only one parameter to describe its influence on the effect of a positive mammogram), and that there are no other causes of positive mammograms.

In more complex, real-world inference situations there are often many relevant variables, and as the number of variables increases this difference becomes more dramatic. We cannot generally expect to have sufficient data available to determine the full joint distribution over all these variables in a purely statistical, non-causal fashion, with the number of degrees of freedom in this distribution increasing exponentially with the number of variables. Causal Bayesian inference provides a way to go beyond the available data, by using causal domain knowledge to fill in the gaps where statistical data is lacking.
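As an illustration of both points, the following Python sketch (ours, not from the original paper) implements the noisy-or model of Figure 1. The two base rates are given in the scenario; the two causal powers are assumed to be near 1, as in the text. Four numbers suffice where an unconstrained joint over three binary variables would need seven, and inference recovers the approximately 25% judgment:

```python
from itertools import product

# Four parameters, versus 2^3 - 1 = 7 for an unconstrained joint distribution.
p_cancer = 0.02  # P(cancer): base rate given in the scenario
p_cyst   = 0.06  # P(cyst): base rate given in the scenario
w_cancer = 1.0   # causal power of cancer -> +M (assumed near 1)
w_cyst   = 1.0   # causal power of cyst -> +M (assumed near 1)

def p_pos(cancer, cyst):
    """Noisy-or: +M occurs unless every present cause independently fails."""
    return 1 - (1 - w_cancer) ** cancer * (1 - w_cyst) ** cyst

# P(cancer | +M), summing the joint over the four cause configurations.
p_m = p_cancer_and_m = 0.0
for cancer, cyst in product([0, 1], repeat=2):
    p = ((p_cancer if cancer else 1 - p_cancer)
         * (p_cyst if cyst else 1 - p_cyst)
         * p_pos(cancer, cyst))
    p_m += p
    if cancer:
        p_cancer_and_m += p
print(p_cancer_and_m / p_m)  # ~0.25: cancer explains about 1 in 4 positives
```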
Causal Bayesian inference as a descriptive model

The causal Bayesian framework can be used to explain human judgment in the "base-rate neglect" literature and in our experiments. Because judgments are made using causal models, rather than statistics alone, experimenter-provided statistics should be used differently depending on the causal structure one believes to be underlying them. Two cases are of particular importance to base-rate neglect, and will be explored in our experiments.

First, statistics can often map clearly onto one causal model, but not another. For example, suppose we are provided with statistical data indicating that the risk of breast cancer (C) is associated with two lifestyle factors, being childless (L) and having high stress levels (S), but we do not have the full joint distribution among the three variables. Suppose further that we know that stress causes cancer, but we do not know how being childless and having cancer are causally related. Three causal models are consistent with this knowledge (see Figure 2), but they have different parameters, which means a given statistic may fit clearly into one model but not into another. Suppose for instance we are given the statistic for the probability of high stress levels given that one is childless (e.g., P(S|L) = 0.75). For Figure 2b, the given statistic corresponds directly to a model parameter, thus it can be directly assigned to that parameter. For Figure 2c, there is no model parameter corresponding to P(S|L), but there is a parameter corresponding to its inverse, P(L|S); hence, by Bayes' rule, one can assign the quantity P(S|L) P(L) / P(S) to the parameter for P(L|S). For Figure 2a, P(S|L) does not correspond directly to a parameter of the model or its inverse, which means there is no single prescription for how such a statistic will influence future judgments from this model. In Experiments 1, 2, and 3, we test the hypothesis that statistics that map clearly onto parameters of the causal model are more likely to be used appropriately, while statistics that do not have corresponding parameters in the model are more likely to be used incorrectly or ignored.

Second, even when provided statistics can be clearly mapped to parameters, the causal Bayesian framework prescribes different ways of using those statistics in making judgments depending on the causal structure. Suppose we are told that a particular woman is childless, and we are asked to judge the likelihood of her having cancer. If being childless causes breast cancer (Figure 2a), then the risk of cancer in a childless woman is increased, regardless of whether or not she has high stress levels (assuming for simplicity that these factors do not interact). However, it could be the case that being childless causes women to develop high stress levels (Figure 2b), but does not directly cause cancer. In this case, the risk of cancer in a childless woman is still increased, but we can ignore the fact that she is childless if we know her level of stress. Finally, it might be the case that having high stress levels causes women not to have children (Figure 2c). In this case, we should again ignore the fact that a woman is childless if we already know the woman's level of stress. These principles of causal structure are intuitively sound, but the notion that statistical data should be used differently for different causal structures is beyond the scope of classical statistical norms.
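The screening-off behavior in the chain structure of Figure 2b can be checked directly. In the Python sketch below (ours; every number except the given P(S|L) = 0.75 is invented for illustration), conditioning on stress makes childlessness uninformative about cancer:

```python
# Chain model (Figure 2b): childless (L) -> high stress (S) -> cancer (C).
p_L = 0.30             # P(L): assumed
p_S_given = {1: 0.75,  # P(S|L): the statistic given in the text
             0: 0.25}  # P(S|~L): assumed
p_C_given = {1: 0.10,  # P(C|S): assumed
             0: 0.02}  # P(C|~S): assumed

def joint(l, s, c):
    """P(L=l, S=s, C=c) = P(L) P(S|L) P(C|S) under the chain structure."""
    pl = p_L if l else 1 - p_L
    ps = p_S_given[l] if s else 1 - p_S_given[l]
    pc = p_C_given[s] if c else 1 - p_C_given[s]
    return pl * ps * pc

def p_cancer(**obs):
    """P(C=1 | obs) for observations such as L=1 or S=1."""
    states = [(l, s, c) for l in (0, 1) for s in (0, 1) for c in (0, 1)]
    kept = [st for st in states
            if all(dict(zip("LSC", st))[k] == v for k, v in obs.items())]
    return (sum(joint(*st) for st in kept if st[2] == 1)
            / sum(joint(*st) for st in kept))

print(p_cancer(L=1))       # 0.080: childlessness raises cancer risk...
print(p_cancer(L=1, S=1))  # 0.100: ...but adds nothing once stress is known
print(p_cancer(S=1))       # 0.100: L is screened off by S
```

Under the common-cause structure of Figure 2a, by contrast, the analogous computation would leave P(C|L, S) dependent on L.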
The causal Bayesian standard is able to make inferences where previous standards could not, prescribing the appropriate use of limited data by making use of the conditional independence relations determined by causal structure. Experiment 4 tests the applicability of this principle to human judgment. We test the hypothesis that people will ignore a given piece of statistical data in a case where it is rational to do so given the causal structure of the task, but they will use that same statistic if the causal structure is slightly modified to suggest that it is relevant to the judgment at hand.

Experiments

Psychological studies of judgment under uncertainty typically provide people with statistical data and then ask them to make judgments using the provided statistics, but if several different causal models are possible, this information may not be sufficient for the causal Bayesian framework to prescribe a unique correct answer. The normatively correct inference depends both on the particular causal model used and on how the statistics are assigned to the parameters of the model. Therefore it is only possible to prescribe a single correct answer using the causal Bayesian framework if (1) the model structure is known, (2) the provided statistics map unambiguously onto model parameters, and (3) no free parameters remain after the provided statistics have been assigned. The issue of how provided statistics are used to update the parameters of a model, and how they are then used in subsequent inferences, plays a central role in our experiments.

In each of the following four experiments we test the extent to which people's judgments conform to the prescriptions of causal Bayesian inference by providing a scenario in which the statistics clearly map onto the parameters of an unambiguous causal model. In Experiments 1-3 we compare these judgments to those on an equivalent scenario from the base-rate neglect literature in which the statistics do not map clearly onto parameters of the causal model. In Experiment 4, we compare these judgments to those on an equivalent scenario with a different causal structure, in which the base rate statistic is rendered irrelevant to the judgments.

In each experiment, the formal statistical structures of the two scenarios were always identical from the point of view of the classical Bayesian norm, thus the judgment prescribed by this norm is the same for the two scenarios. Furthermore, all other factors previously identified as playing a role in base-rate neglect (such as salience or causal relevance of the base rate) were held constant, thus the heuristics & biases view would predict that people exhibit identical levels of base-rate neglect in the two scenarios. Crucially, however, the two scenarios always differ in their causal structure, such that the correct answers prescribed by our new causal Bayesian norm differ across scenarios. Thus, only the causal Bayesian framework predicts that people's judgments will differ between the two scenarios. In addition, only the causal Bayesian framework predicts that people will exhibit less base-rate neglect on the scenario with a clear parameter mapping and a causal structure that requires that the base rate be used.

Experiment 1

In the original mammogram problem (Eddy, 1982; Gigerenzer & Hoffrage, 1995) the base rate of cancer in the population often appears to be neglected when people judge the likelihood that a woman who receives a positive mammogram has cancer.
Figure 3 (a-c) depicts the three phases of inference for the mammogram scenario. A causal model of this scenario constructed from common knowledge should include cancer as a cause of positive mammograms (+M), as depicted in Figure 3a. In this model, the variable cancer has no parents, therefore the conditional probability table for cancer contains just one parameter, P(cancer), which directly corresponds to the base rate provided in the problem. Because there is only one way to assign the base rate to this model parameter, the base rate should influence judgments by causal Bayesian standards.

In this experiment, we demonstrate empirically that people do not neglect this base rate in a newly developed scenario that differs only in causal structure, and we argue that the real difficulty people have in the classic version is with the false-positive statistic: the probability of a positive mammogram in a woman who does not have cancer, P(+M|¬cancer) = 9.6%. On classic versions of this task, we hypothesize that people may adopt a causal model in which the false-positive statistic does not correspond to a model parameter. If people assume a noisy-or parameterization, as we used in constructing the causal model in Figure 1, the model will have a parameter for P(+M|cancer) but not for P(+M|¬cancer); this reflects the intuition that the absence of a cause has no power to produce an effect (see Figure 3b). Although it would be possible to accommodate the false-positive statistic within a noisy-or parameterization by hypothesizing an additional cause of positive mammograms, and interpreting this statistic as the causal power of that alternative cause, this represents several steps of hypothetical reasoning that might not occur to many people. Many participants may realize that the false-positive statistic is somehow relevant, and if they cannot fit it into their model, they may look for some simple way to use it to adjust their judgment. For example, subtracting the false-positive rate from the true-positive rate would be one such strategy, consistent with typical responses classified as "base-rate neglect".

To test our hypothesis about when people can use base rates properly in causal inference, we developed a new scenario in which the causal model is clear and all the statistics clearly map onto parameters of the model. We clarified the causal structure by providing an explicit alternative cause for positive mammograms in women who do not have cancer: benign cysts (see Figure 1). We replaced the false-positive rate in the original problem with the base rate of dense but harmless cysts, and described the mechanism by which these cysts generate positive mammograms. This new statistic, the base rate of benign cysts in the population, directly maps onto the parameter for P(cyst) in the model (see Figure 1b).
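A quick check (our arithmetic, not data from the paper) shows how this hypothesized subtraction strategy lands in the response range typically reported:

```python
# If the false-positive statistic has no parameter to map onto, a simple
# fallback is to subtract it from the hit rate.
hit_rate = 0.80         # P(+M | cancer)
false_positive = 0.096  # P(+M | ~cancer)
print(hit_rate - false_positive)  # 0.704: within the 70%-90% answers
                                  # typically observed, versus the 7.8%
                                  # prescribed by classical Bayes (Eq. 2)
```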